24 research outputs found

    A new approach to segmentation of 2D range scans into linear regions

    No full text

    Orthogonal 3D-SLAM for Indoor Environments Using Right Angle Corners

    No full text
    Soon, many service robotic applications will require a real-time localization and 3D-mapping capability for autonomous navigation. Toward a lightweight and practical SLAM algorithm for indoor scenarios, we propose a fast SLAM algorithm that benefits from sensor geometry for feature extraction and enhances the mapping process using the dominant orthogonality in the engineered structures of man-made environments. Range images obtained using a nodding SICK laser scanner are segmented into planar patches with polygonal boundaries in linear time. Right-angle corner features are constructed from the recognized orthogonal planes and used for robot localization. In addition to these corners, the map also contains planar patches with inner and outer boundaries for 3D modeling and recognition of the major building structures. Experiments using a mobile robot in our laboratory hallway demonstrate the effectiveness of our approach. Results of the algorithm are compared with hand-measured ground truth.
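    The right-angle corner construction described above amounts to intersecting three mutually orthogonal planes. A minimal numpy sketch, assuming planes given in Hessian form n·x = d; the function name and the orthogonality tolerance are illustrative assumptions, not the paper's implementation:

    ```python
    import numpy as np

    def corner_from_planes(normals, offsets, ortho_tol=0.1):
        """Intersect three (near-)orthogonal planes n_i . x = d_i
        to obtain a right-angle corner point (hypothetical helper)."""
        N = np.asarray(normals, dtype=float)   # 3x3, each row a unit normal
        d = np.asarray(offsets, dtype=float)   # plane offsets
        # reject triples of planes that are not close to mutually orthogonal
        if np.max(np.abs(N @ N.T - np.eye(3))) > ortho_tol:
            raise ValueError("planes are not mutually orthogonal enough")
        # the three planes meet in a unique point: solve N x = d
        return np.linalg.solve(N, d)

    # Three axis-aligned walls meeting at the point (1, 2, 0.5)
    p = corner_from_planes(np.eye(3), [1.0, 2.0, 0.5])
    ```

    In practice the plane normals come from the segmented planar patches, so the orthogonality check doubles as a filter for spurious corner candidates.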

    Extrinsic Self Calibration of a Camera and a 3D Laser Range Finder from Natural Scenes

    No full text
    Abstract — In this paper, we describe a new approach for the extrinsic calibration of a camera with a 3D laser range finder that can be done on the fly. This approach does not require any calibration object. Only a few point correspondences are used, manually selected by the user from a scene viewed by the two sensors. The proposed method relies on a novel technique to visualize the range information obtained from a 3D laser scanner. This technique converts the visually ambiguous 3D range information into a 2D map in which natural features of a scene are highlighted. We show that by enhancing the features, the user can easily find the points corresponding to the camera image points, so visually identifying laser-camera correspondences becomes as easy as image pairing. Once point correspondences are given, extrinsic calibration is done using the well-known PnP algorithm followed by a nonlinear refinement process. We show the performance of our approach through experimental results, in which we use an omnidirectional camera. The implication of this method is important because it brings 3D computer vision systems out of the laboratory and into practical use.
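    The nonlinear refinement step that follows PnP minimizes reprojection error over the extrinsics. A simplified numpy sketch, refining only the translation with the rotation held fixed; the Gauss-Newton scheme, the function names, and the synthetic setup are illustrative assumptions, not the paper's implementation:

    ```python
    import numpy as np

    def project(K, R, t, X):
        """Pinhole projection of 3D points X (Nx3) with extrinsics (R, t)."""
        Xc = X @ R.T + t                # transform into the camera frame
        uv = Xc @ K.T
        return uv[:, :2] / uv[:, 2:3]   # perspective division

    def refine_translation(K, R, t0, X, uv_obs, iters=20):
        """Gauss-Newton refinement of the translation part of the extrinsics,
        a simplified stand-in for the full nonlinear refinement."""
        t = np.asarray(t0, dtype=float)
        for _ in range(iters):
            r = (project(K, R, t, X) - uv_obs).ravel()  # reprojection residual
            J = np.zeros((r.size, 3))
            eps = 1e-6
            for k in range(3):          # numerical Jacobian w.r.t. t
                dt = np.zeros(3); dt[k] = eps
                J[:, k] = ((project(K, R, t + dt, X) - uv_obs).ravel() - r) / eps
            t = t - np.linalg.lstsq(J, r, rcond=None)[0]
        return t

    # Synthetic check: recover a known translation from exact correspondences
    rng = np.random.default_rng(0)
    X = np.column_stack([rng.uniform(-1, 1, 20),
                         rng.uniform(-1, 1, 20),
                         rng.uniform(3, 5, 20)])
    K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
    R = np.eye(3)
    t_true = np.array([0.1, -0.05, 1.0])
    uv_obs = project(K, R, t_true, X)
    t_hat = refine_translation(K, R, np.array([0., 0., 1.2]), X, uv_obs)
    ```

    A full implementation would refine rotation and translation jointly (e.g. over an axis-angle parameterization) and would start from the PnP estimate rather than a hand-picked initial guess.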